Monitoring the health and vigor of grasslands is vital to informing management decisions that optimize rotational grazing in agricultural applications. To take advantage of forage resources and improve land productivity, we need to understand pastureland growth patterns, knowledge that is unavailable at the current state of the art. In this paper, we propose deploying a team of robots to monitor the evolution of an unknown pastureland environment to achieve this goal. Because such an environment usually evolves slowly, we need to design a strategy that rapidly assesses the environment over large areas at low cost. We therefore propose an integrated pipeline comprising data synthesis, deep neural network training and prediction, and a multi-robot deployment algorithm that monitors the pastureland intermittently. Specifically, using expert-informed agricultural data coupled with novel data synthesis in ROS Gazebo, we first propose a new neural network architecture to learn the spatiotemporal dynamics of the environment. These predictions help us understand pastureland growth patterns at large scales and make appropriate monitoring decisions for the future. Based on our predictions, we then design an intermittent multi-robot deployment policy for low-cost monitoring. Finally, we compare the proposed pipeline against other methods, from data synthesis to prediction and planning, to corroborate its performance.
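The core scheduling idea, visiting only the regions predicted to change fastest within a limited robot budget, can be illustrated with a minimal sketch. The function name, threshold, and greedy prioritization below are illustrative assumptions, not the paper's actual deployment algorithm:

```python
# Hypothetical sketch of an intermittent deployment decision: regions
# predicted to change fastest are visited first, within a robot budget.

def schedule_visits(predicted_growth, num_robots, threshold=0.5):
    """Return region ids to visit this round, fastest-changing first.

    predicted_growth: dict mapping region id -> predicted growth rate
    num_robots: how many regions can be visited this round
    threshold: regions predicted to change slower than this are skipped
    """
    candidates = [(rate, rid) for rid, rate in predicted_growth.items()
                  if rate >= threshold]
    candidates.sort(reverse=True)              # highest predicted change first
    return [rid for _, rid in candidates[:num_robots]]

growth = {"A": 0.9, "B": 0.2, "C": 0.7, "D": 0.55}
print(schedule_visits(growth, num_robots=2))   # -> ['A', 'C']
```

Slowly evolving regions (here "B") are deliberately left unvisited, which is what makes the monitoring intermittent and low-cost.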
Individual-level data (microdata) that characterizes a population is essential for studying many real-world problems. However, acquiring such data is not straightforward due to cost and privacy constraints, and access is often limited to aggregated data (macro data) sources. In this study, we examine synthetic data generation as a tool to extrapolate difficult-to-obtain high-resolution data by combining information from multiple easier-to-obtain lower-resolution data sources. In particular, we introduce a framework that uses a combination of univariate and multivariate frequency tables from a given target geographical location, together with frequency tables from other auxiliary locations, to generate synthetic microdata for individuals in the target location. Our method combines the estimation of a dependency graph and conditional probabilities from the target location with the use of a Gaussian copula to leverage the available information from the auxiliary locations. We perform extensive testing on two real-world datasets and demonstrate that our approach outperforms prior approaches in preserving the overall dependency structure of the data while also satisfying the constraints defined on the different variables.
translated by 谷歌翻译
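The Gaussian-copula step can be sketched in miniature: draw correlated latent normals, map them to uniforms via the normal CDF, then read off each variable's empirical quantiles. This two-variable toy is an illustrative assumption, not the paper's full framework (which also estimates a dependency graph and conditional probabilities):

```python
# Minimal Gaussian-copula sampling sketch (illustrative two-variable toy).
import math
import random

def gaussian_copula_sample(corr, marginals, n, seed=0):
    """Draw n synthetic rows: correlated uniforms from a 2D Gaussian
    copula, mapped through each variable's empirical quantile function.

    corr: correlation in [-1, 1] between the two latent normals
    marginals: list of two sorted value lists (one per variable)
    """
    rng = random.Random(seed)
    rows = []
    for _ in range(n):
        z1 = rng.gauss(0, 1)
        z2 = corr * z1 + math.sqrt(1 - corr ** 2) * rng.gauss(0, 1)
        row = []
        for z, values in zip((z1, z2), marginals):
            u = 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF
            row.append(values[min(int(u * len(values)), len(values) - 1)])
        rows.append(row)
    return rows

age = sorted([23, 31, 35, 42, 58])
income = sorted([20, 30, 45, 60, 90])
print(gaussian_copula_sample(0.8, [age, income], n=3))
```

Each synthetic row takes values only from the observed marginals, while the copula correlation injects the dependency structure borrowed from auxiliary locations.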
For a number of tasks, such as 3D reconstruction, robotic interfaces, and autonomous driving, camera calibration is essential. In this study, we present a unique method for predicting intrinsic (principal point offset and focal length) and extrinsic (baseline, pitch, and translation) properties from a pair of images. In contrast to existing methods, which build comprehensive end-to-end solutions, we suggest a novel method in which the camera model equations are represented as a neural network in a multi-task learning framework. By reconstructing the 3D points using a camera model neural network and then using the reconstruction loss to obtain the camera specifications, this innovative camera projection loss (CPL) method allows us to estimate the desired parameters. As far as we are aware, our approach is the first to use a multi-task learning approach that includes mathematical formulas in a learning framework to jointly predict both the extrinsic and intrinsic parameters. Additionally, we provide a new dataset, the CVGL Camera Calibration Dataset [1], which has been collected using the CARLA Simulator [2]. Empirically, we show that our suggested strategy outperforms both conventional methods and deep learning-based methods on 8 out of 10 parameters assessed using both real and synthetic data. Our code and generated dataset are available at https://github.com/thanif/Camera-Calibration-through-Camera-Projection-Loss.
Data owners face increasing liability for how the use of their data could harm underprivileged communities. Stakeholders would like to identify the characteristics of data that lead an algorithm to be biased against any particular demographic group, defined, for example, by race, gender, age, and/or religion. Specifically, we are interested in identifying subsets of the feature space where the ground-truth response function from features to observed outcomes differs across demographic groups. To this end, we propose a forest-of-decision-trees algorithm that produces a score capturing how likely an individual's response is to vary with sensitive attributes. Empirically, we find that our method allows us to identify the individuals most likely to be misclassified by several classifiers, including random forests, logistic regression, support vector machines, and k-nearest neighbors. The advantage of our method is that it allows stakeholders to characterize risky samples that may contribute to discrimination, and to use the proposed score to estimate the risk of incoming samples.
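The scoring idea can be illustrated with a toy: score an individual by the fraction of trees in a forest whose prediction flips when only the sensitive attribute is toggled. The stump rules and field names below are hypothetical; the paper's forest is learned from data rather than hand-written:

```python
# Illustrative sketch (not the paper's exact algorithm): the score is the
# fraction of trees whose prediction changes when the sensitive attribute
# of an individual is flipped, all other features held fixed.

def variation_score(forest, x, sensitive_key):
    flipped = dict(x, **{sensitive_key: 1 - x[sensitive_key]})
    changes = sum(tree(x) != tree(flipped) for tree in forest)
    return changes / len(forest)

# Toy forest of decision stumps (hypothetical rules for illustration).
forest = [
    lambda x: x["income"] > 50,                      # ignores the attribute
    lambda x: x["income"] > 50 or x["group"] == 1,   # partly uses it
    lambda x: x["group"] == 0,                       # depends only on it
]
print(variation_score(forest, {"income": 40, "group": 0}, "group"))  # -> 0.6666666666666666
```

Individuals with high scores are exactly the "risky samples" whose outcome appears tied to the sensitive attribute, and are the ones most likely to be misclassified by downstream classifiers.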
Data-driven societal event forecasting methods exploit relevant historical information to predict future events. These methods rely on historical labeled data and cannot accurately predict events when data are limited or of poor quality. Studying causal effects between events goes beyond correlation analysis and can contribute to more robust event prediction. However, incorporating causality analysis into data-driven event forecasting is challenging due to several factors: (i) events occur in a complex and dynamic social environment, where many unobserved variables, i.e., hidden confounders, affect both potential causes and outcomes; and (ii) given spatiotemporal non-independent and identically distributed (non-IID) data, modeling hidden confounders for accurate causal effect estimation is not trivial. In this work, we introduce a deep learning framework that integrates causal effect estimation into event forecasting. We first study the problem of individual treatment effect (ITE) estimation from observational event data with spatiotemporal attributes and propose a novel causal inference model to estimate ITEs. We then incorporate the learned event-related causal information into event prediction as prior knowledge. Two robust learning modules, a feature reweighting module and an approximate constraint loss, are introduced to enable prior knowledge injection. We evaluate the proposed causal inference model on real-world event datasets and, by feeding the learned causal information into different deep learning methods, validate the effectiveness of the proposed robust learning modules in event prediction. Experimental results demonstrate the strength of the proposed causal inference model for societal events and showcase the beneficial properties of the robust learning modules in societal event forecasting.
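The quantity being estimated, an individual treatment effect, can be made concrete with a toy estimator: the ITE of a unit is its outcome minus the outcome of its nearest neighbor in the opposite treatment group. This matching estimator is a deliberately simple stand-in; the paper's model additionally handles hidden confounders and spatiotemporal non-IID data:

```python
# Toy ITE estimate via nearest-neighbour matching (illustrative only).

def ite_matching(samples):
    """samples: list of (x, treated, outcome) tuples. For each unit, the
    ITE is its outcome minus the outcome of the closest unit (in x) from
    the other treatment group, signed so treated effects are positive."""
    ites = []
    for x, t, y in samples:
        other = [(abs(x - x2), y2) for x2, t2, y2 in samples if t2 != t]
        _, y_match = min(other)                 # closest opposite-group unit
        ites.append(y - y_match if t else y_match - y)
    return ites

data = [(1.0, 1, 5.0), (1.1, 0, 3.0), (3.0, 1, 7.0), (2.9, 0, 4.0)]
print(ite_matching(data))  # -> [2.0, 2.0, 3.0, 3.0]
```

Per-unit effects like these, rather than a single average effect, are what the framework injects into the forecaster as prior knowledge.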
In this paper, we explore how we can build upon the data and models of internet images and use them to adapt to robot vision without requiring any extra labels. We present a framework called Self-supervised Embodied Active Learning (SEAL). It utilizes perception models trained on internet images to learn an active exploration policy. The observations gathered by this exploration policy are labeled using 3D consistency and used to improve the perception model. We build and utilize 3D semantic maps to learn both action and perception in a completely self-supervised manner. The semantic map is used to compute an intrinsic motivation reward for training the exploration policy and to label the agent's observations using spatiotemporal 3D consistency and label propagation. We demonstrate that the SEAL framework can be used to close the action-perception loop: it improves the object detection and instance segmentation performance of a pretrained perception model simply by moving around in training environments, and the improved perception model can in turn be used to improve object goal navigation.
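One simple form an intrinsic motivation reward over a semantic map can take is coverage: reward the agent for cells it labels for the first time. This is an assumed simplification for illustration, not SEAL's actual reward:

```python
# Sketch of a coverage-style intrinsic reward over a semantic map
# (assumption for illustration: reward = newly labelled map cells).

def coverage_reward(semantic_map, observed, new_cells):
    """Add newly seen cells to the map and reward each first-time label."""
    reward = 0
    for cell, label in new_cells.items():
        if cell not in observed:
            observed.add(cell)
            semantic_map[cell] = label
            reward += 1
    return reward

smap, seen = {}, set()
r1 = coverage_reward(smap, seen, {(0, 0): "chair", (0, 1): "table"})
r2 = coverage_reward(smap, seen, {(0, 1): "table", (1, 1): "sofa"})
print(r1, r2)  # -> 2 1
```

Re-observing a known cell earns nothing, so maximizing such a reward pushes the policy toward unexplored parts of the environment, which is the behavior the exploration policy needs.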
Camera calibration is a necessity in various tasks including 3D reconstruction, hand-eye coordination for robotic interaction, autonomous driving, etc. In this work we propose a novel method to predict extrinsic (baseline, pitch, and translation) and intrinsic (focal length and principal point offset) parameters using an image pair. Unlike existing methods, instead of designing an end-to-end solution, we propose a new representation that incorporates the camera model equations as a neural network in a multi-task learning framework. We estimate the desired parameters via a novel camera projection loss (CPL) that uses the camera model neural network to reconstruct the 3D points and uses the reconstruction loss to estimate the camera parameters. To the best of our knowledge, ours is the first method to jointly estimate both the intrinsic and extrinsic parameters via a multi-task learning methodology that combines analytical equations in a learning framework for the estimation of camera parameters. We also propose a novel dataset generated using the CARLA Simulator. Empirically, we demonstrate that our proposed approach achieves better performance than both deep learning-based and traditional methods on 8 out of 10 parameters evaluated using both synthetic and real data. Our code and generated dataset are available at https://github.com/thanif/Camera-Calibration-through-Camera-Projection-Loss.
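The principle behind a projection-based loss can be sketched with a plain pinhole model: project known 3D points with candidate intrinsics and penalize the pixel error. This is a simplified stand-in for the paper's CPL, which embeds the camera equations in a neural network and is optimized jointly with the other tasks:

```python
# Minimal pinhole reprojection loss sketch (assumed simplification of CPL).

def project(point, fx, fy, cx, cy):
    """Project a 3D camera-frame point with a pinhole model."""
    X, Y, Z = point
    return (fx * X / Z + cx, fy * Y / Z + cy)

def reprojection_loss(points3d, pixels, params):
    """Mean squared pixel error under candidate intrinsics `params`."""
    err = 0.0
    for p3, (u, v) in zip(points3d, pixels):
        u_hat, v_hat = project(p3, *params)
        err += (u - u_hat) ** 2 + (v - v_hat) ** 2
    return err / len(points3d)

true = (500.0, 500.0, 320.0, 240.0)            # fx, fy, cx, cy
pts = [(0.1, 0.2, 2.0), (-0.3, 0.1, 4.0)]
pix = [project(p, *true) for p in pts]
print(reprojection_loss(pts, pix, true))                              # -> 0.0
print(reprojection_loss(pts, pix, (480.0, 500.0, 320.0, 240.0)) > 0)  # -> True
```

Because the loss is zero exactly at the true intrinsics and grows as they drift, minimizing it recovers the parameters, the same mechanism CPL exploits inside a learned framework.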
For an autonomous agent to fulfill a wide range of user-specified goals at test time, it must be able to learn broadly applicable and general-purpose skill repertoires. Furthermore, to provide the requisite level of generality, these skills must handle raw sensory input such as images. In this paper, we propose an algorithm that acquires such general-purpose skills by combining unsupervised representation learning and reinforcement learning of goal-conditioned policies. Since the particular goals that might be required at test-time are not known in advance, the agent performs a self-supervised "practice" phase where it imagines goals and attempts to achieve them. We learn a visual representation with three distinct purposes: sampling goals for self-supervised practice, providing a structured transformation of raw sensory inputs, and computing a reward signal for goal reaching. We also propose a retroactive goal relabeling scheme to further improve the sample-efficiency of our method. Our off-policy algorithm is efficient enough to learn policies that operate on raw image observations and goals for a real-world robotic system, and substantially outperforms prior techniques. * Equal contribution. Order was determined by coin flip.
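The retroactive goal relabeling scheme can be sketched in hindsight-experience style: stored transitions are relabeled with a state the agent actually reached, turning failed attempts into successful examples for the chosen goal. The concrete state/reward encoding below is an illustrative assumption (the paper relabels in a learned latent goal space):

```python
# Hindsight-style retroactive goal relabeling sketch (illustrative).

def relabel(trajectory):
    """trajectory: list of (state, action, goal) steps. Returns extra
    training tuples whose goal is the final state actually reached,
    with the reward recomputed for that substituted goal."""
    final_state = trajectory[-1][0]
    relabeled = []
    for state, action, _ in trajectory:
        reward = 1.0 if state == final_state else 0.0
        relabeled.append((state, action, final_state, reward))
    return relabeled

traj = [("s0", "right", "g"), ("s1", "right", "g"), ("s2", "stop", "g")]
print(relabel(traj))
```

Even if the original goal "g" was never reached, the relabeled tuples carry a positive reward signal for reaching "s2", which is what improves the sample efficiency of the off-policy learner.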